SYSTEM AND METHOD OF INTERACTION IN VIRTUAL ENVIRONMENTS USING HAPTIC DEVICES (Machine-translation by Google Translate, not legally binding)
Abstract:
System and method of interaction in virtual environments. The method comprises: detecting (104, 105) a zoom order for an initial virtual scene (212); generating (108), by a graphics processing unit (206), a new virtual scene (216) from the initial virtual scene (212) with a magnification level modified according to the detected zoom order; mapping (110) the workspace of a haptic device (204) to the space representing the new virtual scene (216); and representing (112) the new virtual scene (216). The zoom order can be issued by pressing a button (205) of the haptic device (204) or by voice command. The zoom order is an instruction that either progressively modifies the magnification level of the initial virtual scene (212) or modifies it according to a certain predefined value. (Machine-translation by Google Translate, not legally binding)

Publication number: ES2716012A1
Application number: ES201830941
Filing date: 2018-09-28
Publication date: 2019-06-07
Inventors: Camino Fernández Llamas; Gonzalo Esteban Costales; Alexis Gutiérrez Fernández
Applicant: Universidad de León
Description:
FIELD OF THE INVENTION

[0004] The object of the invention falls within the field of computer science, more specifically within that of haptic devices and virtual simulation.

BACKGROUND OF THE INVENTION

[0007] The techniques designed to control interaction through haptic devices can be classified into three groups according to their mode of operation: those based on position control, those based on speed control, and hybrid techniques. First, techniques based on position control use the position of the end effector of the haptic device (in the physical workspace) to locate its virtual avatar accordingly. Second, techniques based on speed control displace the virtual avatar according to the deviation of the end effector with respect to the center of the workspace, as if it were a joystick. Finally, hybrid techniques combine position control and speed control, adding some mechanism that allows switching between the two.

[0009] Among the interaction techniques based on position control are clutching [1] and direct or scaled mapping [2]. Clutching is an adaptation to haptic devices of the technique used with desktop mice, which consists of lifting the mouse and repositioning it in a more comfortable position when an area of interest cannot be reached with a single movement. Applied to haptic devices, it consists of decoupling the movement of the end effector from the movement of the virtual avatar while, for example, a button is pressed. In this way, the maneuver can be performed repeatedly, so that any part of the virtual scene can be reached with a haptic device that has a limited workspace. Direct or scaled mapping establishes a correspondence between the workspace of the haptic device and the entire virtual scene by means of a scale factor, which implies that in large scenes a small movement of the haptic device produces a large movement of the virtual avatar, seriously penalizing accuracy.

[0010] Another technique based on position control is the one presented by Conti & Khatib [3], which consists of continuously repositioning the end effector towards the center of the workspace without updating the visual feedback. In other words, while the user interacts with a virtual object, the end effector gradually moves, with smooth movements, towards the center of the physical workspace, without affecting the visual representation in any way and therefore without the user noticing.

[0012] Hybrid techniques comprise two modes of operation: one based on position control and the other on speed control. These techniques focus on solving the problem of the mode change, as in the proposal of Liu et al. [4], which maintains a neutral area around the center of the workspace of the haptic device. Thus, when the mode change is executed, the position desired by the user is not disturbed, since that change is associated with the pressing of a button, which could otherwise cause an unwanted movement of the end effector.

[0014] Finally, the so-called bubble technique is also framed within the hybrid interaction techniques. It consists of virtually dividing the workspace of the haptic device into two sections: an inner bubble and the remaining outer work area. A semi-transparent bubble is visually represented in the virtual environment; within the bubble, good precision is achieved thanks to a 1:1 mapping, while reaching any other part of the environment requires moving the bubble. To do this, the user pushes the virtual avatar out of the bubble; the bubble then moves in the direction of the avatar, at greater or lesser speed depending on the magnitude of the applied force [5].
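To make the scaled-mapping idea above concrete, the following minimal sketch (not taken from the patent; all names and values are hypothetical) shows a linear mapping between a haptic workspace and a virtual scene, and how the scale factor turns a millimetre of hand movement into centimetres of avatar movement in a large scene:

```python
# Illustrative sketch only: direct (scaled) mapping between a haptic
# workspace and a virtual scene, as described for technique [2] above.
# All names and values are hypothetical.

def map_scaled(device_pos: float, ws_min: float, ws_max: float,
               scene_min: float, scene_max: float) -> float:
    """Linearly map a 1D end-effector coordinate to scene coordinates."""
    t = (device_pos - ws_min) / (ws_max - ws_min)  # normalized position
    return scene_min + t * (scene_max - scene_min)

# A 10 cm workspace mapped onto a 5 m scene gives a 50x scale factor:
# a 1 mm hand movement moves the avatar 5 cm, hence the loss of accuracy.
print(map_scaled(0.051, 0.0, 0.10, 0.0, 5.0))  # 2.55 (meters)
```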
Bibliographic references

[1] Dominjon, L., Perret, J., & Lécuyer, A. (2007). Novel devices and interaction techniques for human-scale haptics. The Visual Computer, 23(4), 257-266.

[2] Hirzinger, G., Brunner, B., Dietrich, J., & Heindl, J. (1993). Sensor-based space robotics: ROTEX and its telerobotic features. IEEE Transactions on Robotics and Automation, 9(5), 649-663.

[3] Conti, F., & Khatib, O. (2005, March). Spanning large workspaces using small haptic devices. In Eurohaptics Conference, 2005 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2005. World Haptics 2005. First Joint (pp. 183-188). IEEE.

[4] Liu, L., Liu, G., Zhang, Y., & Wang, D. (2014, August). A modified motion mapping method for haptic device based space teleoperation. In Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on (pp. 449-453). IEEE.

[5] Dominjon, L., Lécuyer, A., Burkhardt, J. M., Andrade-Barroso, G., & Richir, S. (2005, March). The "Bubble" technique: interacting with large virtual environments using haptic devices with limited workspace. In Eurohaptics Conference, 2005 and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2005. World Haptics 2005. First Joint (pp. 639-640). IEEE.

DESCRIPTION OF THE INVENTION

[0028] The invention relates to a method and system of interaction in virtual environments using haptic devices. The invention proposes a new interaction model for haptic devices that solves their main problem, namely their limited workspace, allowing this type of device, with a limited work area, to be used as the main interaction element while achieving a high level of mobility and precision.

[0030] The present invention allows haptic devices to be used for natural and precise interaction with virtual scenes. The interaction model is based on performing enlargements of an initial virtual scene using a haptic device (which is why the magnification performed can be called "haptic zoom"): the operator first approaches the area of interest of the scene and then activates the haptic zoom to obtain greater precision. Therefore, given a virtual scene in which a haptic device is used, the haptic zoom consists of performing one or more enlargements of that scene to achieve a higher level of haptic precision.

[0032] In each of the enlarged scenes, the workspace available to the virtual avatar of the haptic device is directly mapped to the space representing the virtual scene, thus allowing the avatar to reach any place in the scene. If the user requires a higher level of precision in some part of the scene, the haptic zoom is activated. This triggers a visual zoom on the area of interest, giving rise to a second scene. Once the new scene has been determined, the workspace of the haptic device is mapped again to the space representing the new scene, once again allowing the virtual avatar to reach any area.

[0034] The method of interaction in virtual environments according to the present invention comprises the following stages:

- Detecting a zoom order for an initial virtual scene.
[0038] - Generating, by a graphics processing unit, a new virtual scene from the initial virtual scene with a magnification level modified according to the detected zoom order.

[0040] - Mapping the workspace of a haptic device to the space representing the new virtual scene.

[0042] - Representing the new virtual scene on a display device, preferably a virtual reality viewer (although it could also be represented on a screen).

[0044] The zoom order can be an instruction that progressively modifies the magnification level of the initial virtual scene, or an instruction that modifies the magnification level of the initial virtual scene according to a certain predefined value.

[0046] The detection of the zoom order can be performed by the haptic device itself, preferably by detecting the pressing of a button on the haptic device. In this case, the method comprises sending the detected zoom order to the graphics processing unit. Additionally or alternatively, the zoom order can be detected by recognizing a voice command. In that case, the detection can be performed directly by the graphics processing unit by analyzing the signal captured by a microphone.

[0048] A second aspect of the present invention relates to an interaction system in virtual environments that implements the method described above. The system comprises a haptic device for interacting with a virtual environment, a graphics processing unit responsible for generating scenes of the virtual environment, and a display device for representing the generated virtual scenes. The graphics processing unit is configured to receive a zoom order for an initial virtual scene; generate, from the initial virtual scene, a new virtual scene with a magnification level modified according to the received zoom order; and map the workspace of the haptic device to the space representing the new virtual scene.

[0050] There are also different ways of applying the haptic zoom, taking into account the location of the enlargement and of the virtual avatar (a sketch of both modes is given after this list):

• Zoom centered on the virtual avatar. In this zoom mode, the position of the virtual avatar is taken as the center of the scene after the enlargement. This mode involves physically repositioning the end effector of the haptic device at the center of the workspace so that it matches its on-screen representation. When an enlargement of the scene is centered on the virtual avatar (the haptic pointer), the virtual avatar ends up at the center of the resulting scene. To match the scene with the physical device (the end effector), the effector must be repositioned automatically at the center of the workspace, so that the virtual avatar is at the center of the scene and the effector is at the center of the workspace. This correspondence allows any point of the virtual scene to be reached with the movements permitted by the (limited) workspace. The automatic repositioning is carried out by applying forces that drive the haptic device towards the center of its workspace.

• Zoom keeping the proportions. In this zoom mode, the scene is enlarged while the virtual avatar is kept at the same percentage distances from the edges of the scene, so no physical repositioning of the end effector is necessary.
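As a minimal illustration of the two zoom modes just listed (a sketch under assumed names and values, not part of the patent text; shown in one dimension for brevity and without clamping to the scene bounds), the new scene viewport could be computed as follows:

```python
# Illustrative sketch only: new scene viewport for the two haptic-zoom
# modes, in 1D and without clamping to the scene bounds. Hypothetical names.

def zoom_centered_on_avatar(lo: float, hi: float, avatar: float,
                            factor: float) -> tuple:
    """The avatar becomes the center of the enlarged scene; the end
    effector must then be physically re-centered in the workspace."""
    half = (hi - lo) / (2.0 * factor)  # half-width of the enlarged view
    return avatar - half, avatar + half

def zoom_keeping_proportions(lo: float, hi: float, avatar: float,
                             factor: float) -> tuple:
    """The avatar keeps the same percentage distance to the scene edges,
    so no physical repositioning of the end effector is needed."""
    p = (avatar - lo) / (hi - lo)      # avatar position as a fraction
    width = (hi - lo) / factor
    new_lo = avatar - p * width
    return new_lo, new_lo + width

# 2x zoom of a scene spanning [0, 10] with the avatar at x = 8:
print(zoom_centered_on_avatar(0.0, 10.0, 8.0, 2.0))   # (5.5, 10.5)
print(zoom_keeping_proportions(0.0, 10.0, 8.0, 2.0))  # (4.0, 9.0)
```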
[0055] The present invention solves a problem inherent to haptic devices: their small workspace. Applying the model described by the invention to haptic devices allows them to be used more naturally and precisely as the main interaction devices in different types of simulations from different technical fields (medical sector, automotive industry, aerospace industry, etc.).

BRIEF DESCRIPTION OF THE DRAWINGS

[0058] Next, a series of figures that help to better understand the invention and that expressly relate to an embodiment of said invention, presented as a non-limiting example thereof, are described very briefly.

[0060] Figures 1A and 1B represent a flow diagram of the method of interaction in virtual environments according to two possible embodiments of the present invention.

[0062] Figures 2A and 2B show, according to one possible embodiment, the interaction system in virtual environments in operation, when the user performs a haptic zoom.

[0064] Figure 3 represents the relationship between the images per second (frames) and the zoom level applied.

PREFERRED EMBODIMENT OF THE INVENTION

[0067] The present invention relates to a method and system of interaction in virtual environments using haptic devices.

[0069] Figure 1A represents the main scheme of the method of interaction in virtual environments according to the present invention, through which a zoom (either zooming in or zooming out of the image) is performed using a haptic device. This type of zoom, enabled by a haptic device, can be called "haptic zoom".

[0071] A user performs an action that generates a zoom order 102 to enlarge or reduce an initial virtual scene, which is the image shown to the user through a virtual reality viewer. The action may consist, for example, of pressing a button of a haptic device. The zoom order is detected 104 by the haptic device, which sends 106 the detected zoom order to a graphics processing unit.

[0073] Next, the graphics processing unit generates 108 a new virtual scene from the initial virtual scene with a magnification level modified according to the received zoom order, and maps 110 the workspace of the haptic device to the space representing the new virtual scene. Finally, the new virtual scene is represented 112 in a virtual reality viewer (which will normally be worn by the same user).

[0075] The zoom order includes at least information indicating whether the initial image is to be enlarged or reduced. In addition, the zoom order may contain information on the zoom level to be applied to the initial virtual scene, although this information is preferably preset in the graphics processing unit (e.g. as a variable stored in memory and modifiable by the user through an application). The zoom order can also include information on whether a point zoom or a continuous, progressive zoom must be applied to the initial virtual scene. In the case of a progressive zoom, the order is preferably sent continuously for as long as the haptic device detects the user's action (e.g. while the user keeps a button pressed). In this way, the graphics processing unit generates, at a certain refresh rate, successive images (i.e. frames) enlarged or reduced from the initial virtual scene, with a determined magnification step between successive images, until it stops receiving the zoom order.
Due to the use of a high refresh rate (e.g. 30, 60 or 90 frames per second) and the rapid succession of these images, the user fluidly perceives a zoom of the initial virtual scene while performing the zoom-order action (e.g. while holding a button pressed).

[0077] Figure 1B represents a scheme of the method of interaction in virtual environments according to another possible embodiment. This embodiment differs from the previous one in the way the zoom order is detected. In the case of Figure 1A, the detection 104 is performed by the haptic device, by detecting the pressing of a button on the haptic device. In the embodiment of Figure 1B, the detection 105 is performed by the graphics processing unit, by detecting a voice command of the user captured by a microphone.

[0079] Figure 2A shows the different elements of the interaction system in virtual environments according to a possible embodiment of the present invention. In particular, the system comprises a haptic device 204 with which the user interacts with a virtual environment, a graphics processing unit 206 responsible for generating scenes of the virtual environment, and a display device 208 for representing the generated virtual scenes. According to the embodiment shown in Figure 2A, the display device is a virtual reality viewer 208, although it can also be an ordinary screen (e.g. a monitor or a television set).

[0081] According to the embodiment of Figure 2A, the graphics processing unit 206 is a separate entity, independent of the virtual reality viewer 208. Such a graphics processing unit may be implemented, for example, by a computer connected to the virtual reality viewer 208 by a cable 210 (or by a high-frequency wireless connection according to recent technologies, e.g. WiGig). However, in another embodiment (not shown in the figures) the graphics processing unit 206 and the virtual reality viewer 208 may be the same entity; that is, the graphics processing unit 206 can be integrated into an autonomous virtual reality viewer 208 (e.g. Oculus Go). In yet another embodiment, the graphics processing unit 206 and the display device may be implemented by a smart mobile phone coupled to a support (e.g. a Samsung Gear VR virtual reality headset) for its correct use.

[0083] The haptic device 204 is configured to detect a zoom order to enlarge or reduce an initial virtual scene 212, represented by way of example in the figure. The zoom order can be detected by the haptic device 204 by detecting the press, performed by a user 202, of a button 205 of the haptic device 204. The haptic device 204 sends 106 the detected (and conveniently processed) zoom order, preferably wirelessly, to the graphics processing unit 206.

[0085] As shown in Figure 2B, the graphics processing unit 206 generates, from the initial virtual scene 212, a new virtual scene 216 with a magnification level modified as a function of the received zoom order (in the example, the zoom order is an enlargement, but it could instead be a reduction). As indicated above, the level of enlargement or reduction to be applied (e.g. 1.5x, 2x, 4x, ...) may be predetermined in the graphics processing unit 206 (e.g. as a value stored in memory) or may be included in the sent 106 zoom order itself. The zoom to be performed can be point or progressive, depending on the sent 106 zoom order (for example, the zoom order can include a field that determines the type of zoom to apply, point or progressive); a minimal sketch of such a zoom-order message is given below.
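A minimal sketch of what such a zoom-order message might look like follows; the field names, types and the very use of a message object are assumptions for illustration, not the patent's actual data format:

```python
# Illustrative sketch only: a possible layout for the zoom-order message
# sent from the haptic device to the graphics processing unit. Field
# names, types and defaults are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ZoomDirection(Enum):
    ENLARGE = 1
    REDUCE = 2

class ZoomType(Enum):
    POINT = 1        # one fixed, predefined magnification step
    PROGRESSIVE = 2  # repeated while the button stays pressed

@dataclass
class ZoomOrder:
    direction: ZoomDirection
    zoom_type: ZoomType
    # Optional explicit level (e.g. 1.5, 2.0, 4.0); if None, the graphics
    # processing unit falls back to the value preset in its memory.
    level: Optional[float] = None

order = ZoomOrder(ZoomDirection.ENLARGE, ZoomType.POINT, level=2.0)
print(order)
```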
Thereafter, the graphics processing unit 206 maps the workspace of the haptic device 204 (the spatial coordinates in which the haptic device moves) to the space representing the new virtual scene 216.

[0087] According to one embodiment, the graphics processing unit 206 may be configured to zoom the initial virtual scene 212 centered on a virtual avatar, relocating the haptic device 204 to the center of the workspace.

[0089] In another embodiment, a zoom is applied in which the proportions from a virtual avatar to the edges of the scene are maintained. In this embodiment, the graphics processing unit 206 zooms the initial virtual scene while keeping a virtual avatar 214 (represented with a cross) at the same distances (Dx, Dy) from the edges of the virtual scene, as represented in the example of Figures 2A and 2B, where the dashed lines represent the percentage distance from the avatar 214 to the edges of the scene. After the activation of the haptic zoom, it can be seen how the avatar 214 is kept at the same percentage distance from the edges of the scene, which means that both the end effector (i.e. the physical device that the user handles) and the haptic pointer (i.e. the representation of the end effector in the virtual environment) of the haptic device maintain their positions.

[0091] Depending on the zoom order received or on the configuration of the graphics processing unit 206, the haptic zoom executed may be a point zoom, in which each press of the button 205 of the haptic device 204 zooms the scene by a certain predefined value, or a continuous zoom, in which pressing and holding the button 205 of the haptic device 204 applies a progressive zoom to the scene until the user releases the button, at which moment the desired magnification level is considered to have been reached.

[0093] Figure 3 shows, for the execution of a continuous zoom, the calculation of the magnification level over the original scene that each frame must have during 1 second, both at 30 frames per second (top of Figure 3) and at 60 frames per second (bottom of Figure 3).

[0095] Given an initial scene e_0 and a position of the virtual avatar p, measured as a percentage of the distances to the edges of the scene, the haptic zoom is activated when the user presses one of the buttons 205 of the haptic device 204. To achieve an acceptable level of fluidity, at least 30 small magnification steps are applied each second, 60 being the desired number of updates per second. Starting from the initial scene e_0 with a magnification value of 1.0 and a refresh rate of 30 enlargements per second, the zoom value of the subsequent scene e_1 is:

zoom(e_1) = zoom(e_0) + (a × zoom(e_0)) / fps = 1.0 + (0.3 × 1.0) / 30 = 1.01

The general formula for calculating the zoom level of a generic scene e_i from the previous scene e_{i-1} is:

zoom(e_i) = zoom(e_{i-1}) + (a × zoom(e_{i-1})) / fps

In this formula, a is a constant and fps is the refresh rate (in frames per second). In a preferred embodiment, a = 0.3. By applying this formula consecutively for 30 iterations (one second) over the initial scene, with an initial zoom level of 1.0 (zoom(e_0) = 1.0), a final zoom level of 1.3478 is obtained; that is, in one second the initial scene experiences a magnification of 34.78% (zoom(e_30) = 1.3478). For the 60 fps example, the final zoom is 34.88% (zoom(e_60) = 1.3488).
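The per-frame update just described can be checked with a short script; this is an illustrative sketch of the stated formula (the function name and defaults are assumptions), and it reproduces the magnification figures quoted above:

```python
# Illustrative sketch only: the continuous-zoom update
# zoom(e_i) = zoom(e_{i-1}) + a * zoom(e_{i-1}) / fps, applied once per
# rendered frame. Function name and defaults are hypothetical.

def continuous_zoom(initial_zoom: float = 1.0, a: float = 0.3,
                    fps: int = 30, seconds: float = 1.0) -> float:
    """Zoom level after holding the zoom button for `seconds` at `fps`."""
    zoom = initial_zoom
    for _ in range(round(fps * seconds)):
        zoom += a * zoom / fps  # small multiplicative step per frame
    return zoom

print(continuous_zoom(fps=30))  # ~1.347849, the 34.78% quoted above
print(continuous_zoom(fps=60))  # ~1.348850, the 34.88% quoted above
```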
[0105] Once the zoom level of each scene e_i has been calculated, it is applied over the zoom level of the previous scene e_{i-1}, so that the resulting scene e_i keeps the virtual avatar at the same percentage distance from the edges of the scene as in the previous scene e_{i-1}.

[0107] There are different ways of activating the haptic zoom, taking into account the physical possibilities (e.g. buttons) of the available haptic device. Given the different number of buttons on the desktop haptic devices present on the market, the haptic zoom activation (either in point zoom mode or in continuous zoom mode) can be done in several ways:

- Haptic devices with a single button:

• Point zoom (constant zoom levels): each press of the button of the haptic device zooms the scene by a certain predefined value. Undoing a zoom level can be carried out with a quick double press of the button.

• Continuous zoom: keeping the button of the haptic device pressed applies a progressive zoom to the scene until the user releases the button, at which point the desired magnification level is considered reached. The action that activates the reverse zoom is a double press of the button of the haptic device, keeping the button pressed after the second press.

- Haptic devices with at least two buttons:

• Point zoom (constant zoom levels): each press of one of the buttons of the haptic device zooms the scene by a certain predefined value. Undoing a zoom level can be done with a press of the other button.

• Continuous zoom: keeping one of the buttons of the haptic device pressed applies a progressive zoom to the scene until the user releases the button, at which point the desired magnification level has been reached. The action that activates the reverse zoom is holding down the other button of the haptic device.

- Haptic devices without buttons: through voice commands, the user can apply both point and continuous increases and decreases of the zoom level. The voice commands are captured through a microphone connected to the graphics processing unit 206 (e.g. a computer or the virtual reality viewer itself) responsible for representing the virtual environment. The graphics processing unit 206 receives, analyzes and executes the voice commands, applying the corresponding action to the virtual scene (i.e. activating the haptic zoom in the virtual representation).

[0120] The most appropriate implementation of the haptic zoom is one in which a desktop haptic device with at least two buttons is used as the interaction method, applying a continuous zoom in which the proportions from the avatar to the edges of the scene are maintained.
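As a purely illustrative sketch of the single-button activation scheme described in the list above (not part of the patent; the timing thresholds and event format are assumptions), the gestures could be distinguished as follows:

```python
# Illustrative sketch only: telling apart the single-button gestures
# described above (single press = point zoom, quick double press = undo,
# press-and-hold = continuous zoom). Thresholds and event format are
# hypothetical assumptions.

DOUBLE_PRESS_WINDOW = 0.30  # max seconds between presses of a double press
HOLD_THRESHOLD = 0.40       # min seconds held down to count as a hold

def classify(events):
    """`events`: list of (timestamp, "down" | "up") tuples for one button."""
    downs = [t for t, kind in events if kind == "down"]
    ups = [t for t, kind in events if kind == "up"]
    if len(downs) >= 2 and downs[1] - downs[0] <= DOUBLE_PRESS_WINDOW:
        return "reverse zoom"      # double press
    if ups and ups[0] - downs[0] >= HOLD_THRESHOLD:
        return "continuous zoom"   # press and hold
    return "point zoom"            # single press

print(classify([(0.00, "down"), (0.10, "up")]))                  # point zoom
print(classify([(0.00, "down"), (0.10, "up"), (0.20, "down")]))  # reverse zoom
print(classify([(0.00, "down"), (0.90, "up")]))                  # continuous zoom
```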
Claims (21)

[1] 1. Method of interaction in virtual environments, characterized in that it comprises: detecting (104, 105) a zoom order for an initial virtual scene (212); generating (108), by a graphics processing unit (206), a new virtual scene (216) from the initial virtual scene (212) with a magnification level modified according to the detected zoom order; mapping (110) the workspace of a haptic device (204) to the space representing the new virtual scene (216); representing (112) the new virtual scene (216).

[2] Method according to claim 1, characterized in that the zoom order is an instruction for modifying the magnification level of the initial virtual scene (212) according to a certain predefined value.

[3] Method according to claim 1, characterized in that the zoom order is an instruction for progressively modifying the magnification level of the initial virtual scene (212).

[4] Method according to any of the preceding claims, characterized in that the detection (104) of the zoom order is carried out by the haptic device (204); wherein the method comprises sending (106) the detected zoom order to the graphics processing unit (206).

[5] Method according to claim 4, characterized in that the detection (104) of the zoom order comprises detecting the pressing of a button (205) of the haptic device (204).

[6] Method according to any of claims 1 to 3, characterized in that the detection (105) of the zoom order comprises the detection of a voice command.

[7] Method according to any of claims 1 to 6, characterized in that the zoom performed on the initial virtual scene (212) is centered on a virtual avatar (214), wherein the method comprises relocating the haptic device towards the center of the workspace.

[8] Method according to any of claims 1 to 6, characterized in that the zoom performed on the initial virtual scene (212) keeps a virtual avatar (214) at the same percentage distance from the edges of the virtual scene.

[9] Method according to any of the preceding claims, characterized in that the new virtual scene (216) is represented (112) in a virtual reality viewer (208).

[10] Method according to any of the preceding claims, characterized in that the new virtual scene (216) is represented (112) on a screen.

[11] 11. Interaction system in virtual environments, comprising: a haptic device (204) for interacting with a virtual environment, a graphics processing unit (206) responsible for generating scenes of the virtual environment, and a display device for representing the generated virtual scenes; characterized in that the graphics processing unit (206) is configured to: receive a zoom order for an initial virtual scene (212); generate, from the initial virtual scene (212), a new virtual scene (216) with a magnification level modified according to the received zoom order; map the workspace of the haptic device (204) to the space representing the new virtual scene (216).

[12] System according to claim 11, characterized in that the zoom order is an instruction for modifying the magnification level of the initial virtual scene (212), wherein the graphics processing unit (206) is configured to enlarge or reduce, depending on the zoom order, the initial virtual scene (212) by a predefined magnification level.
[13] System according to claim 11, characterized in that the zoom order is an instruction for progressively modifying the magnification level of the initial virtual scene, wherein the graphics processing unit (206) is configured to progressively enlarge or reduce, depending on the zoom order, the initial virtual scene (212) according to a given refresh rate.

[14] System according to claim 13, characterized in that the graphics processing unit (206) is configured to calculate the zoom level applied to a scene e_i starting from the zoom level applied to the previous scene e_{i-1} according to the following formula:

zoom(e_i) = zoom(e_{i-1}) + (a × zoom(e_{i-1})) / fps

where zoom(e_i) is the zoom level of the scene e_i, zoom(e_{i-1}) is the zoom level of the previous scene e_{i-1}, a is a constant, and fps is the refresh rate.

[15] System according to any of claims 11 to 14, characterized in that the haptic device (204) is configured to detect (104) a zoom order for an initial virtual scene (212) and send (106) the zoom order to the graphics processing unit (206).

[16] System according to claim 15, characterized in that the haptic device (204) is configured to detect (104) the zoom order by detecting the pressing of a button (205) of the haptic device (204).

[17] System according to any of claims 11 to 14, characterized in that the graphics processing unit (206) is configured to detect (105) the zoom order by detecting a voice command captured by a microphone.

[18] 18. System according to any of claims 11 to 17, characterized in that the graphics processing unit (206) is configured to: zoom the initial virtual scene centered on a virtual avatar (214), and relocate the haptic device (204) to the center of the workspace.

[19] System according to any of claims 11 to 17, characterized in that the graphics processing unit (206) is configured to zoom the initial virtual scene keeping a virtual avatar (214) at the same percentage distance from the edges of the virtual scene.

[20] System according to any of claims 11 to 19, characterized in that the display device is a virtual reality viewer (208).

[21] System according to any of claims 11 to 19, characterized in that the display device is a screen.
Patent family

Publication number | Publication date
ES2716012B2 | 2020-07-22
Legal status

2019-06-07 | BA2A | Patent application published (ref. document ES2716012, kind code A1, effective date 2019-06-07)
2020-07-22 | FG2A | Definitive protection (ref. document ES2716012, kind code B2, effective date 2020-07-22)
Priority

Application number | Filing date | Patent title
ES201830941A | 2018-09-28 | INTERACTION SYSTEM AND METHOD IN VIRTUAL ENVIRONMENTS USING HAPTIC DEVICES